

Search for: All records

Creators/Authors contains: "Nazer, Bobak"


  1.
    Trace conditioning and extinction learning depend on the hippocampus, but it remains unclear how hippocampal neural activity is modulated during these two behavioral processes. To explore this question, we performed calcium imaging of a large number of individual CA1 neurons during both trace eye-blink conditioning and subsequent extinction learning in mice. Our findings reveal that distinct populations of CA1 cells contribute to trace conditioning versus extinction learning as learning emerges. Furthermore, we examined network connectivity by calculating co-activity between CA1 neuron pairs and found that CA1 connectivity patterns also differ between conditioning and extinction, even though overall connectivity density remains constant. Together, our results demonstrate that distinct populations of hippocampal CA1 neurons, forming sub-networks with unique connectivity patterns, encode different aspects of learning.
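As an illustration of the pairwise co-activity analysis described above, the following is a minimal sketch, not the authors' pipeline: it uses synthetic event data (real analyses work from deconvolved calcium traces), binarizes per-neuron activity, counts frame-wise co-activations for each CA1 neuron pair, and summarizes connectivity density with a hypothetical co-activity threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 neurons x 500 imaging frames of binary event indicators.
events = rng.random((20, 500)) < 0.05  # True where a neuron is active

def coactivity_matrix(events):
    """Fraction of frames in which each pair of neurons is co-active."""
    e = events.astype(float)
    return (e @ e.T) / e.shape[1]

def connectivity_density(coact, threshold):
    """Share of neuron pairs whose co-activity exceeds a threshold."""
    n = coact.shape[0]
    upper = coact[np.triu_indices(n, k=1)]  # unique pairs only
    return float(np.mean(upper > threshold))

coact = coactivity_matrix(events)
density = connectivity_density(coact, threshold=0.01)
print(f"connectivity density: {density:.2f}")
```

Comparing such co-activity matrices (and the resulting densities) across conditioning and extinction sessions is one simple way to contrast sub-network structure while holding the density summary fixed.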
  2.
    We present novel information-theoretic limits on detecting sparse changes in Ising models, a problem that arises in many applications where network changes can occur in response to external stimuli. We show that the sample complexity of detecting sparse changes, in a minimax sense, is no better than that of learning the entire model, even in settings with local sparsity. This is surprising in light of prior work rooted in sparse recovery methods, which suggests that sample complexity in this context scales only with the number of network changes. To shed light on when change detection is easier than structure learning, we consider testing of edge deletion in forest-structured graphs and high-temperature ferromagnets as case studies. For these, we show that testing for small changes is similarly hard, whereas testing for large changes is well-separated from structure learning. These results imply that testing of graphical models may not be amenable to concepts such as restricted strong convexity, leveraged for sparsity pattern recovery, and that algorithm development should instead be directed toward detection of large changes.
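To make the edge-deletion testing problem concrete, here is a small sketch, not the paper's method: it Gibbs-samples a chain-structured (hence forest-structured) Ising model before and after deleting one edge, then compares the empirical correlation on that edge. The node count, coupling strength, and sample sizes are all illustrative choices; a large change (full deletion of a strong edge) shows up clearly in a simple pairwise statistic.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_chain_ising(n_nodes, couplings, n_samples, n_sweeps=50):
    """Gibbs sampler for a chain-structured Ising model with +/-1 spins.

    couplings[i] is the interaction weight on edge (i, i+1).
    """
    X = np.empty((n_samples, n_nodes), dtype=int)
    for s in range(n_samples):
        x = rng.choice([-1, 1], size=n_nodes)
        for _ in range(n_sweeps):
            for i in range(n_nodes):
                # Local field from the chain neighbors of node i.
                field = 0.0
                if i > 0:
                    field += couplings[i - 1] * x[i - 1]
                if i < n_nodes - 1:
                    field += couplings[i] * x[i + 1]
                # Conditional P(x_i = +1 | neighbors) for the Ising model.
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
                x[i] = 1 if rng.random() < p_plus else -1
        X[s] = x
    return X

n, theta = 6, 0.8
base = np.full(n - 1, theta)      # all edges present
changed = base.copy()
changed[2] = 0.0                  # delete edge (2, 3)

X0 = sample_chain_ising(n, base, 300)
X1 = sample_chain_ising(n, changed, 300)

# Empirical correlation on the deleted edge drops toward zero.
c0 = np.mean(X0[:, 2] * X0[:, 3])
c1 = np.mean(X1[:, 2] * X1[:, 3])
print(f"edge (2,3) correlation: before={c0:.2f}, after={c1:.2f}")
```

For a weak edge (small coupling), the two correlations would be close and many more samples would be needed to separate them, which is the regime where the abstract argues testing is as hard as learning the whole model.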